
    Math empowerment: a multidisciplinary example to engage primary school students in learning mathematics

    This paper describes an educational project conducted in a primary school in Italy (Scuola Primaria Alessandro Manzoni in Mulazzano, near Milan). The school asked for our collaboration to improve its results on the National Tests for Mathematics, on which its students, aged 7, had scored below the national average the previous year. From January to June 2016 we supported the teachers, providing them with information, tools and methods to increase their pupils' curiosity about and passion for mathematics. Combining our different backgrounds (instructional design and gamification, information technology and psychology), we tried to provide a broad spectrum of parameters, tools and keys for achieving an inclusive approach that is 'personalised' to each student. This collaboration with teachers and students yielded interesting observations about learning styles, highlighting the negative impact that standardised processes and instruments can have on self-esteem and, consequently, on student performance. The goal of the programme was to find the right learning levers to intrigue and excite students about mathematical concepts and their applications. Our hypothesis is that, by treating the learning of mathematics as a continuous process in which students develop freely through their own experiments, observations, involvement and curiosity, students can achieve improved results on the National Tests (INVALSI). The paper includes the results of a survey completed by the children, 'About Me and Mathematics'.

    Is Three better than Two? A Study on EEG Activity and Imagination Abilities in 2D vs 3D Stimuli

    Real and virtual are often treated as opposites, but the boundary between the two is blurred. The main goal of our study is to answer the question of whether the presence of a third dimension (3D) is a fundamental step of the virtual toward the real world, and whether it produces measurable differences in the neural activity of the spectator [8]. The possibility of considering real what is virtual is also discussed [6, 7].

    Bio-molecular diagnosis through Random Subspace Ensembles of Learning Machines.

    Traditional clinical diagnostic approaches may sometimes fail to detect tumors (Alizadeh et al. 2001). Several results have shown that bio-molecular analysis (e.g. gene expression profiling) may help to better characterize malignancies. Information supporting both the diagnosis and the prognosis of malignancies at the bio-molecular level can be obtained from high-throughput biotechnologies (e.g. DNA microarrays). Recent work on unsupervised analysis of complex bio-molecular data (Bertoni and Valentini, 2006) showed that random projections obeying the Johnson-Lindenstrauss lemma can be used for discovering structures in bio-molecular data, validating clustering results, and improving clustering results. Random Subspace (RS) ensembles can improve the accuracy of bio-molecular diagnosis with very high-dimensional data, and they can easily be applied to heterogeneous bio-molecular and clinical data. A promising new approach combines state-of-the-art feature (gene) selection methods with RS ensembles. RS ensembles are computationally intensive, but can easily be parallelized on clusters of workstations (e.g. in an MPI framework)
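The Random Subspace scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the nearest-centroid base learner, the majority-vote combination and all parameter values are assumptions chosen only to keep the example self-contained and to show why the method suits very high-dimensional gene-expression data.

```python
import numpy as np

def random_subspace_ensemble(X_train, y_train, X_test,
                             n_estimators=50, subspace_dim=10, seed=0):
    """Random Subspace (RS) ensemble: each base learner is trained on a
    random subset of the features (genes); per-sample predictions are
    combined by majority vote."""
    rng = np.random.default_rng(seed)
    n_features = X_train.shape[1]
    classes = np.unique(y_train)
    votes = np.zeros((X_test.shape[0], classes.size), dtype=int)
    for _ in range(n_estimators):
        # draw a random feature subspace (without replacement)
        feats = rng.choice(n_features, size=subspace_dim, replace=False)
        # nearest-centroid base classifier fitted on the projected data
        centroids = np.array([X_train[y_train == c][:, feats].mean(axis=0)
                              for c in classes])
        Zt = X_test[:, feats]
        dist = np.linalg.norm(Zt[:, None, :] - centroids[None, :, :], axis=2)
        pred = dist.argmin(axis=1)            # index into `classes`
        votes[np.arange(Zt.shape[0]), pred] += 1
    return classes[votes.argmax(axis=1)]      # majority vote
```

Because every base learner only ever sees `subspace_dim` features, the loop parallelizes trivially, which is the property the abstract's MPI remark relies on.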

    How many tests do you need to diagnose Learning Disabilities?

    The diagnosis of Learning Disabilities (LDs) is frequently subject to cognitive biases. In Italy, minimal diagnostic standards were identified during a national Consensus Conference (2010); however, specialists use different protocols to assess reading and cognitive abilities. We therefore propose to support LD diagnosis with Artificial Neural Networks (ANNs). Clinical results from 203 reports were used as input to investigate which measures predict an LD diagnosis; in addition, correlations among LDs were explored. Preliminary results show that ANNs can usefully support a clinical diagnosis of LDs with an 81.93% average accuracy and, under certain conditions, with 99% certainty. Additionally, the 10 most meaningful tests for each LD were identified, and significant correlations between dyscalculia and dyslexia were found
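A hedged sketch of the kind of classifier the abstract describes: a small feedforward network that maps a vector of clinical test scores to a binary diagnosis. The architecture (one tanh hidden layer, sigmoid output), the training procedure and the synthetic data are illustrative assumptions, not the authors' actual network.

```python
import numpy as np

def train_mlp(X, y, hidden=8, epochs=2000, lr=0.5, seed=0):
    """One-hidden-layer network (tanh hidden units, sigmoid output) trained
    with full-batch gradient descent on the cross-entropy loss.
    Returns a predict function mapping feature vectors to a 0/1 label."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden);      b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                   # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # P(diagnosis = 1)
        g = (p - y) / n                            # dLoss/dlogit
        gh = np.outer(g, W2) * (1.0 - h ** 2)      # backprop to hidden layer
        W2 -= lr * (h.T @ g);  b2 -= lr * g.sum()
        W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)
    def predict(Xq):
        hq = np.tanh(Xq @ W1 + b1)
        return (1.0 / (1.0 + np.exp(-(hq @ W2 + b2))) > 0.5).astype(int)
    return predict
```

In a real diagnostic setting the inputs would be the standardized scores from the assessment protocols mentioned above, and accuracy would be estimated on held-out reports rather than on the training set.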

    Negotiating the Web Science Curriculum through Shared Educational Artefacts

    EXTENDED ABSTRACT The far-reaching impact of the Web on society is widely recognised and acknowledged. The interdisciplinary study of this impact has crystallised in the field of study known as Web Science. However, defining an agreed, shared understanding of what constitutes Web Science requires complex negotiation and translation of understandings across component disciplines, national cultures and educational traditions. Some individual institutions have already established particular curricula, and discussions in the Web Science Curriculum Workshop series have marked out the territory to some extent. This paper reports on a process being adopted across a consortium of partners to systematically create a shared understanding of what constitutes Web Science. It records and critiques the processes instantiated to agree a common curriculum, and presents a framework for future discussion and development. The need to study the Web in its complexity, development and impact led to the creation of Web Science. Web Science is inherently interdisciplinary. Its goal is to: a) understand the Web's growth mechanisms; b) create approaches that allow new, more powerful and more beneficial mechanisms to occur. Teaching Web Science is a unique experience, since the emerging discipline combines two essential features. On the one hand, the analysis of microscopic laws extrapolated to the macroscopic realm generates observed behaviour. On the other hand, languages and algorithms on the Web are built to produce novel desired computer behaviour that should be put in context. Finding a suitable curriculum that is different from the study of language, algorithms, interaction patterns and business processes is thus an important and challenging task, for the simple reason that we believe the future of sociotechnical systems will lie in their innovative power (inventing new ways to solve problems) rather than in their capacity to optimise current practices. 
The Web Science Curriculum Development (WSCD) Project focuses European expertise in this interdisciplinary endeavour, with the ultimate aim of designing a joint master's programme in Web Science between the partner universities. The process of curriculum definition is being addressed through a negotiation process which mirrors the web science and engineering approach described by Berners-Lee (Figure 1 below). The process starts on the engineering side (right). From the technical design point of view, the consortium is creating an open repository of shared educational artefacts using EdShare [1] (based on EPrints) to collect or reference the whole range of educational resources used in our various programmes. Socially, these resources will be annotated against a curriculum categorization [2], which is itself subject to negotiation and change, currently via a wiki. This last process is represented by complexity and collaboration at the bottom of the diagram. The resources necessarily extend beyond artefacts used in the lecture and seminar room, encompassing artefacts associated with the administrative and organisational processes necessary to assure the comparability of the educational resources and underwrite the quality standards of the associated awards. Figure 1: Web Science and Engineering Approach (e.g. see http://www.w3.org/2007/Talks/0314-soton-tbl/#%2811%29) From the social point of view, the contributions will be discussed and peer reviewed by members of the consortium. Our intention is that by sharing the individual components of the teaching and educational process, and quality-assuring them through peer review, we will provide concrete examples of our understanding of the discipline. However, as Berners-Lee observes, it is in the move from the micro to the macro that the magic (complexity) lies. 
The challenge for our consortium, once our community repository is adequately populated, is to involve the wider community in the contribution, discussion and annotation that will lead to the evolution of a negotiated, agreed but evolving curriculum for Web Science. Others have worked on community approaches to curriculum development. For example, the Computer Science community maintains a repository of existing syllabi [3] that enables designers of new courses to understand how others have approached the problem, and the Information Science community is using a wiki [4] to enable the whole community to contribute to the dynamic development of the curriculum. What makes this project unique is that, rather than taking a top-down structured approach to curriculum definition, it takes a bottom-up approach, using the actual teaching materials as the basis on which to iteratively negotiate and refine the definition of the curriculum.

    Arsenic in mice as an experimental model for risk modifiers.

    Studies on the relevance of host factors in modulating physiological responses following chronic exposure to xenobiotics were carried out according to a “Toxicogenomic Model on Arsenic in Mice” developed at the JRC. This model focuses on chronic exposure to arsenate given alone or in combination with other xenobiotics, to assess potential “cocktail effects” and related cumulative risks. DNA-macroarray technology is applied to evaluate physiological responses at the transcriptional level and to assess possible biochemical responses. A cluster of 1200 cancer genes was used for screening purposes, while quantitative PCR on selected genes was applied for validation. Exposure windows ranged from in utero and post-lactation up to adult age (4 months); the chemical forms were arsenate and dimethylarsenate, at doses from 0.1 up to 10 mg As/L in drinking water. A comparison between acute single doses and chronic exposure was also performed. Chronic exposure to arsenate and atrazine in drinking water was selected as an example of multiple chronic exposure. Liver, kidney, lung, bone marrow, adrenals, uterus and testis were the tissues considered. In the tissues of mice chronically exposed to arsenate, the modulation of gene expression depended not only on the levels, types and length of exposure, but was also differently regulated by sex, age and diet. The main gene functional families modulated covered a wide range of biochemical and physiological regulations, such as cell-cycle modulation, cell adhesion, apoptosis, xenobiotic metabolism, DNA repair, protein turnover and proto-oncogenes. The patterns of gene expression were strongly influenced by co-exposure to other xenobiotics such as atrazine and naphthalene, particularly for genes involved in metabolism and in neuroendocrine regulation. 
These effects varied according to the tissue considered, supporting the need for coherent and specifically designed studies to assess relevant biomarkers of long-term exposure to low levels of xenobiotics and their mixtures

    Dietary protein modulates gene expression in mice chronically exposed to arsenate.

    Within a project on the assessment of risk-modifying factors that modulate the health effects of environmental chemicals, we are developing a toxicogenomic approach using an “arsenic in mice” experimental model, considering multi-stressor exposure, genetics, age, levels and length of exposure, etc. In the present study, we used cDNA macroarrays to investigate the effects of low protein intake on the expression of 1185 cancer-related genes in the liver of male and female mice transplacentally exposed to different levels of arsenate in drinking water during gestation and developmental age. The results of this study support the relevance of dietary factors in modulating the physiological responses in gene expression following chronic exposure to xenobiotics. In mice chronically exposed to arsenate in drinking water, the modulation of gene expression in different tissues depended not only on the level of the xenobiotic under investigation, but was mainly regulated by the protein content of the diet

    Application of Multivariate Analysis, Support Vector Machines and Artificial Neural Networks to the Processing of Nuclear Magnetic Resonance data of olive oil and fish oil samples for classification of geographic origin and discrimination between wild and farmed fish.

    Motivations: Traceability and control of the origin of food products are very important for consumers and for European enforcement laboratories. For instance, the high added value of olive oil makes its control an important goal for EU producers and consumers. There is thus a need to develop analytical methods that ensure compliance with labelling, i.e. the control of geographical origin, also supporting the protected designation of origin (PDO) policy, and that determine the genuineness of the product by detecting possible adulterations. Furthermore, EU regulations require that the origin of fish sold on the retail market, wild or farmed as well as geographic, be made available to consumers. Modern analytical techniques such as Nuclear Magnetic Resonance (NMR) provide very informative data on the composition in fatty acids and other constituents of vegetable oils and fish oils. The combination of 1H NMR fingerprinting with multivariate analysis provides an original approach to studying the profile of these oils in relation to the geographical origin of olive oil, or for discriminating between wild and farmed origin for fish such as salmon. 
Methods: For the experiment on fish oil, we used Support Vector Machines (SVMs), a novel learning machine, to authenticate the origin of salmon. SVMs have the advantage of relying on a well-developed theory and have already proved successful in a number of practical applications. The method requires only a very simple preparation of the fish oils extracted from the white muscle of salmon samples. Multivariate (chemometric) techniques are able to filter out the most relevant information in a spectrum, e.g. for classification. In the experiment on olive oil samples, principal component analysis (PCA) was carried out on the ~12,000 variables (chemical shifts), and four data sets were defined prior to PCA. Linear discriminant analysis (LDA) of the first 50 PCs was applied to classify olive oil samples according to geographic origin and year of production. The data analysis was carried out both with and without outliers. Variable selection for LDA was achieved using (i) the best five variables and (ii) an interactive forward stepwise procedure. 
Results: The use of SVMs for discriminating between wild and farmed salmon provides a new and effective method that helps prevent fraud through misrepresentation of the country of origin of salmon. The SVM distinguished correctly between wild and farmed salmon; however, ca. 5% of the country-of-origin assignments were misclassified. Using LDA on the external validation sets, the correct classification of olive oil varied between 47 and 75% (random selection) and between 35 and 92% (Kennard–Stone (KS) selection), depending on geographic origin (country) and production year. A similar success rate could be achieved using partial least squares discriminant analysis (PLS-DA). The success rate can be considerably improved by using probabilistic neural networks (PNNs): correct classification by PNN varied between 58 and 100% on the external validation sets. Other chemometric techniques, such as multiple linear regression or generalized pair-wise correlation, did not give better results. 
Acknowledgements: The authors are grateful to the European project COFAWS (European Commission DG RTD FP5 project GRD2-2000-31813) and to all the collaborators from the project partners (Eurofins Scientific (Nantes, France), North Atlantic Fisheries College (Scalloway, Shetland Islands, United Kingdom), SINTEF Fisheries and Aquaculture (Trondheim, Norway), Joint Research Centre (Ispra, Italy)) who contributed to the collection and preparation of fish samples, and for the authorization to exploit their NMR data in this work
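The PCA-then-LDA pipeline outlined in the Methods can be sketched as below. This is a minimal two-class illustration under stated assumptions (five retained components, a small ridge term for numerical stability, a midpoint decision threshold), not the protocol actually used in the paper, which retained 50 PCs and handled multiple classes.

```python
import numpy as np

def pca_lda_classify(X_train, y_train, X_test, n_components=5):
    """Reduce the spectral variables by PCA (via SVD), then fit a two-class
    Fisher linear discriminant on the retained PC scores."""
    mu = X_train.mean(axis=0)
    # PCA: right singular vectors of the centred data are the PC loadings
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    P = Vt[:n_components].T                  # loadings of the top PCs
    Z, Zt = (X_train - mu) @ P, (X_test - mu) @ P
    m0 = Z[y_train == 0].mean(axis=0)
    m1 = Z[y_train == 1].mean(axis=0)
    # Fisher direction: w = Sw^{-1} (m1 - m0), with a small ridge term
    Sw = np.cov(Z[y_train == 0].T) + np.cov(Z[y_train == 1].T)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(n_components), m1 - m0)
    threshold = w @ (m0 + m1) / 2.0          # midpoint between class means
    return (Zt @ w > threshold).astype(int)
```

Projecting onto a few PCs first is what makes LDA feasible here: the within-class scatter matrix of the raw ~12,000 chemical shifts would be hopelessly singular with tens of samples.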

    Sex as a major determinant of gene expression in tissues of mice exposed to arsenate.

    Inorganic arsenic, frequently found as a contaminant of ground water used for drinking in many areas of the world, is a well-known potent human toxicant and carcinogen. Chronic exposure to inorganic arsenic has been associated with cancer of the skin, lung, bladder and kidney and, probably, liver. The mechanism of arsenic action in vivo is poorly understood, in particular in relation to dose, type of tissue and gender. To elucidate tissue- and gender-dependent biological responses in the genome of mice, we used cDNA macroarrays to investigate the expression of 1185 cancer-related genes in mice after exposure to arsenate in drinking water. Continuous exposure of mice to arsenate in drinking water modulates gene expression in tissues. Interestingly, there were remarkable sex differences: male and female mice showed completely different changes in the expression of cancer-related genes. The main gene functional families modulated covered a wide range of biochemical and physiological regulations, such as cell-cycle modulation, cell adhesion, apoptosis, xenobiotic metabolism, DNA repair, protein turnover and proto-oncogenes. This result demonstrates important gene-environment interactions: the molecular mechanisms triggered by arsenic levels frequently experienced via drinking water are totally different in males and females. The results obtained using cancer-related genes will be compared with profiles of over 30,000 genes using the Applied Biosystems Expression Array System, to clarify the sex-specific gene pathways